Review of the Coursera Human Computer Interaction Course

I’ve now finished the Coursera Human Computer Interaction Course. As I await my final grade, I reflect on my experience, on the statistics on student numbers, and on how the platform can develop.

It’s been a wonderfully wide-ranging course, spanning the whole design process. Participants have learned needfinding and observation techniques, how to carry out rapid prototyping, principles for effective interface and visual design, and a repertoire of strategies for evaluating interfaces. The scope has been a real delight and Scott Klemmer’s lectures have been brilliant.


Today Scott Klemmer, in his concluding message to class participants, shared some statistics on how people have been engaging with the course:

  • 29,105 students watched video(s)
  • 6,853 submitted quiz(zes)
  • 2,470 completed an assignment
  • 791 completed all 5 assignments

Assuming that anyone who submitted a quiz or assignment also watched videos, of the 29,105 people who watched one or more videos:

  • 23.55% submitted one or more quizzes
  • 8.49% submitted one or more assignments
  • 2.72% completed all 5 assignments (and presumably all the quizzes as well, as the final course grade for people doing assignments is also determined in part by their quiz score)
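For anyone who wants to check or extend the arithmetic, here is a minimal Python sketch that reproduces those percentages from the raw counts (the variable and function names are mine, not Coursera’s):

```python
# Engagement counts shared by Scott Klemmer at the end of the course
watched_videos = 29105
submitted_quizzes = 6853
completed_an_assignment = 2470
completed_all_assignments = 791

def as_percentage(part, whole):
    """Express `part` as a percentage of `whole`, rounded to two decimal places."""
    return round(100 * part / whole, 2)

print(as_percentage(submitted_quizzes, watched_videos))          # 23.55
print(as_percentage(completed_an_assignment, watched_videos))    # 8.49
print(as_percentage(completed_all_assignments, watched_videos))  # 2.72
```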


The course offered two different levels of accomplishment – you could either watch the videos and do the weekly review quiz (short multiple-choice exercises), or do all of that and also work on a design project. The former was the ‘basic’ track; the latter was the ‘studio’ track. I chose the latter, as I wanted to learn as much as I could. We now know that 2.72% of those who watched one or more lectures took the time-intensive design assignments through to completion.

I admire the way that the course tried to accommodate people who could only commit to following the lectures, whilst allowing those with the time and inclination to get stuck into a full project.


The quizzes were quite easy and quick to complete. I would be interested to find out how many people watched all the videos and submitted all the quizzes – in other words, how many followed the ‘basic’ track through to completion. Was the provision of this less time-hungry option a success? What was the attrition rate over the course of the class for people on the ‘basic’ track?


Whilst Scott’s videos were excellent, the assignments were the highlight of the course. The assignments allowed participants to put into practice what was discussed in the lectures and tested in the quizzes each week. They were very time consuming (I spent in the region of 15-20 hours a week on mine) but absolutely worthwhile. I liked the way that each of us worked on a project for the duration of the course, going through the entire design process and coming out with a completed prototype.


Online peer assessment was essential in making the assignments work – it would have been impractical to grade them in any other way, given the scale of the course (not to mention the fact that it was not paid for by participants). And being involved in peer assessment enhanced the learning experience. (The pedagogy section of the Coursera website outlines the literature on peer assessment.) As I noted in a comment on a blog on peer grading in online classes, it was highly educative to see other people’s assignments, as this has been such a creative course.

The assessment rubrics were mostly very clear, and they were improved a few weeks into the course when ‘in-between’ marks were introduced. These allowed markers to accommodate work that was better than one standard but had not yet reached the next level of accomplishment – under the revised scale, a marker could award a 2 or a 4 for work that fell between the named levels. For example:

Before:
No performance – 0
Poor performance – 1
Basic performance – 2
Good performance – 3

After:
No performance – 0
Poor performance – 1
Basic performance – 3
Good performance – 5

There were many other tweaks to the platform over the course, as we all got to grips with this experiment.


This course was a first foray into mass online peer assessment. Whilst other online courses have used computer grading to deliver scalable assessment, that wasn’t suitable here. Computer science and mathematics lend themselves to more mechanistic grading processes: one can quite easily devise a method to test whether a given program processes input correctly. But assessing design requires a more holistic, subjective and qualitative evaluation. So the course drew upon the grading rubric used in Scott’s Stanford class and utilised peer evaluation. Scott confesses that “we had no idea how this would work online.”

The peer grading process was certainly not anarchic. Before being let loose on your peers, you had to evaluate some control pieces of work. You’d then see how close you were to Scott Klemmer’s own assessment of that work. Once you got good enough (I think you were deemed to be ready when you’d performed 3 good evaluations) you were set to grade 5 of your peers. This trial grading system felt quite effective.

By grading other people’s work before you went back to grade your own, you were encouraged to take a more humble view, and hopefully one that was more objective. But Scott did note that “there was a real variation in the effort, standards and interpretation of the rubric.” I wonder whether the quality of a user’s evaluations could be monitored by comparing them with the mean grade awarded to the same piece of work by the other evaluators. In reflecting on the assessment process, Scott observed that we need to figure out how to give people richer qualitative feedback. I think that these issues could be addressed hand-in-hand with measures to improve the timings of the course (and reduce attrition and dropouts) and the discussion element of the course.
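As a rough sketch of what I have in mind – the data, names and numbers below are entirely hypothetical – each evaluator’s grades could be compared with the mean of the other grades awarded to the same submission, and anyone who consistently sits a long way from that consensus could be flagged:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical peer-grading data: scores[submission][evaluator] = grade awarded
scores = {
    "project-a": {"alice": 4, "bob": 5, "carol": 5},
    "project-b": {"alice": 2, "bob": 3, "dave": 1},
    "project-c": {"bob": 5, "carol": 3, "dave": 4},
}

def mean_deviation_per_evaluator(scores):
    """Average distance between each evaluator's grade and the mean of the
    other evaluators' grades on the same submission (smaller = closer to consensus)."""
    deviations = defaultdict(list)
    for submission, grades in scores.items():
        for evaluator, grade in grades.items():
            others = [g for e, g in grades.items() if e != evaluator]
            if others:  # need at least one other grade to compare against
                deviations[evaluator].append(abs(grade - mean(others)))
    return {evaluator: mean(ds) for evaluator, ds in deviations.items()}

print(mean_deviation_per_evaluator(scores))
# Evaluators whose deviation stays large across many submissions might warrant
# a closer look, or have their grades weighted down when scores are aggregated.
```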


I do think that the weekly workload could be improved. For those doing the studio track, the workload was quite punishing – particularly for those with work and/or family commitments. I spent around 20-25 hours a week on the course.

In any given week in the HCI course, learners had to carry out peer assessment and work on their own assignment (in addition to viewing the lectures and doing the week’s quiz). This meant that assessment and assignment competed with each other for attention. And there was no real formal incentive to lavish attention on the peer assessment. In his concluding remarks, Scott Klemmer stated that in future there would be more time between assignments.

I would also want to see a way of rewarding good evaluators. Good feedback could be incentivised. Perhaps each participant could state which feedback they found most helpful after each round of assessment, and the person who provided it could receive a little extra credit.

I think the course experience could be improved if there were alternating weeks of creation and assessment. This could encourage deeper reflection and be used to drive richer peer discussion. It would also address the criticism that Coursera’s Massive Open Online Courses (MOOCs) have not been sufficiently discursive and are still more like one-way lectures – though the creation, sharing and discussion of project work in this course has already gone some way towards undermining that criticism.

Scott was impressed by the way that the community drove much of the learning, with people sharing interesting interfaces, articles and other resources. (For example, some participants collated a reading list, drawing upon all the resources mentioned in the lectures; many set up study groups; others performed extra peer assessments voluntarily, and many more articulated or answered people’s questions and concerns in the forums.)

Fostering this communal learning in a more targeted and deliberate fashion would further enhance the experience.


The clarity of the assignments is ripe for improvement. Assignment requirements were not always clear at first, and the overall arc of the course and the destination of the design project were not obvious at the outset. (Maybe this helped avoid putting some people off by concealing the workload!) Over the course these details have been hammered out, so next time through there will be clearer assignment wording, explanatory examples, and a clear roadmap of project work and deadlines.

Whilst sometimes the confusion was frustrating, it never dampened my excitement at being in the first cohort to try out this course. It felt like everyone involved in the process was learning, including the teachers, so I didn’t mind the rough edges (particularly as the course was so good, and completely free). That was pretty cool.


This has been a fantastic course, and I’m still in awe of the fact that it was available for free. I’ll finish by echoing Scott’s observation: “seeing the online education space really blossom gives me a lot of hope for the future.” I’m excited by the online education space that has emerged in force over the last year, and am already plotting my next courses. I’m halfway through Power Searching with Google, and have signed up for Udacity’s CS101.

I’d like to extend my sincerest thanks to Scott Klemmer and the team at Stanford and Coursera who made this course happen. I’ll do my best to use what I’ve learned, to continue improving, and to help others do the same.


10 Responses to Review of the Coursera Human Computer Interaction Course

  1. Tom says:

    What was lecture 5? The reading list did not have suggested books for lecture 5.

  2. Susan says:

    I liked reading your review. If you’re curious, I was one of the people who did the basic track only. I 1.) graduated college, 2.) moved out of state and 3.) started a new job right in the middle of this course, so it seemed more practical.

    I was jealous of the people who did the assignment because I felt like there was no way for the amateur trackers to engage in the discussions because the forum was mostly focused on the assignment, so it became a lonely experience. I wish that 1.) We could maybe participate in the peer review (would that be fair? or maybe only if a student already had 5 peer reviews) or 2.) had some sort of video-lecture sub-forum since the discussion board was almost entirely about the assignment or bug reports or 3.) We had a way to see the projects people were completing (we sort of could later on). I imagine completing the assignment obviously results in a richer experience, but seeing the assignments people were doing probably would help the amateur trackers understand the application of the concepts in the video.

    However, for what it was, I really enjoyed the videos. I’m glad I stuck with it and feel like I did learn something (I am not a programmer, but I have friends who are and things from class have randomly come up in discussion). I don’t foresee a lull in my schedule in the future, but if I had one, I’d retake it and do the full studio track.

    • Thanks for your comment, Susan.

      I certainly think that there could be an option for people doing the apprentice track to contribute to peer assessment. My initial instinct would be to focus this on giving qualitative feedback. Scott said that he wants to expand this element of the course, and this could be done without requiring people to go through the lengthy training process. It could also allow people on the apprentice track to ‘dip in’ to give qualitative peer assessment to the extent that they were willing or able each week.

      I like your point about a video-based discussion sub-forum. I reckon discussion could be designed in to the course quite effectively, particularly in terms of evaluation, but also in sharing wider reading or in reflecting on the issues raised and taking them further.

      Was the showcase of any use? I made my work shareable but didn’t think the actual app at its core was good enough to show off! But I’m quite pleased with my overall writeup for each assignment. I can’t currently share assignments 1 or 2, but have just posted a request in the forum asking for a shareable link for these. If I get these I’ll post my assignments here.

      Maybe next time round there could be an area of the site showing all assignments that had been made publicly shareable by their creators. You could have a look at the best ones and the not so successful ones and probably learn a lot in the process. Or maybe people could agree to have theirs as worked examples or discussion points.

      • Thank you for writing such a detailed review. I did enjoy the class myself, although only in its basic form, no time for assignments; plan to practice on my own time in the future. I totally agree with Susan on not having the opportunity to participate in the discussions. The professor was really great and motivated. I believe online education will grow a lot in the future. Hope so. 😉

  3. Pingback: #Change11 An update on posts related to MOOCs as at 14 July 12 | Learner Weblog

  4. Thanks so much for posting such a wonderfully written and thoughtful review of this class. Like you, I took the studio track, spent 20-plus hours per week on the class, and finished all 5 assignments. It was a lot of work, but I pushed my way through!

    I really can’t say enough great things about the course and the quality of the experience. Great suggestions Susan and Matrix. Hopefully Scott, Chinmay, or one of the other producers of the course see your suggestions and take them under consideration for the next offering of this course.

    I admire the enthusiasm and motivation of all who hung in there through the entire course, on both the “apprentice”, and “studio” track. 🙂

  5. Pingback: Why I’m taking the MOOC MOOC and what I hope to achieve | Reflections

  6. Hello,
    I would like to ask the author’s permission to republish this excellent review on http://myeducationpath.com/courses/31/Human-Computer+Interaction.htm . I want to gather all the feedback in one place, and I will put a link back to this original post.
    Would it be OK to republish your review?
